Real-time Musical Interaction between Musician and Multi-agent System
Abstract
The application of the emergent behavior of multi-agent systems to musical creation, such as controlling the parameters of a sound synthesizer or driving composition, has attracted interest recently. Direct human control or fully programmed operation works, but it is very complicated to set up and tends to sound monotonous. Self-organization, a key property of multi-agent systems, is well suited to controlling synthesizer parameters and generating compositional rules. Furthermore, such a system has the possibility of generating unexpected sounds and musical pieces that even an experienced musician would never try to generate. In this paper we report on a musical computer system that generates synthesizer sounds and melodies by means of a multi-agent system and interacts with a human piano player in real time. We show empirically that our interactive system produces engaging sounds, and we also demonstrate that the human player feels the interaction between the system and himself to be highly reliable.

1. Introduction

The problem with deterministic algorithmic composition, built only from strict rules inherited from traditional composition, is that it offers no possibility of generating unexpected yet remarkable results. One purpose of using computers in musical creation is precisely to generate unexpected results that cannot be obtained in traditional ways. Stochastic composition techniques, which use computer-generated random values as compositional parameters, have therefore been widely used. L. Hiller and L. Isaacson composed the "Illiac Suite" on the very early computer ILLIAC using the well-known Monte Carlo method [1], and I. Xenakis generated the peculiar sounds he called "sound clouds" in his pieces by applying stochastic techniques with a computer [2]. C. Roads collected the best-known stochastic techniques and gave a detailed explanation of them in his book [3].
8th Generative Art Conference GA2005

Ordinary stochastic composition that simply uses raw random values has a problem, however: the computer's output cannot be used for composition directly without revision by the composer, or without programming very strict rules for random-value generation, because successive random values have no relationship to one another. In general, correlations between parameters are important in composition and sound synthesis. Using the raw output directly therefore sometimes yields inaudible sounds, sometimes results that are unbearable as music, and often sounds that are simply uninteresting. Consequently, the application of the self-organization exhibited by multi-agent systems, such as cellular automata (CA) and artificial life (AL, ALife), to musical creation has lately attracted considerable attention. As mentioned above, the relationships between parameters matter in musical creation, and self-organization offers the possibility of generating correlated parameters. In general, emergent CA behavior can control parameter sets for composition and sound synthesis dynamically, eliminating the need to revise the computer's output or arrange values by hand. Furthermore, recent research shows that traditional models of classical composition can be explained in terms of the behavior of multi-agent systems. The stochastic techniques in common use today, which apply random values to each musical or sound-synthesis parameter, usually produce unmusical notes or inaudible sounds; the composer must either revise the computer's output or impose strict restrictions on random-value generation in order to obtain usable results. In contrast, applying self-organization to composition and sound synthesis reduces the likelihood of generating musical junk.
Some composers and researchers have tried to apply the emergent behavior of multi-agent systems to composition and sound synthesis. P. Beyls and D. Millen mapped the state of each CA cell to the pitch, duration, and timbre of musical notes [4,5]. E. Miranda, focusing on the self-organizing behavior of CA, built the composition and sound-synthesis software tools CAMUS and Chaosynth [6,7]; in Chaosynth he developed a mapping that applies CA to the control of granular synthesis. P. Dahlstedt, interested in the evolutionary behavior of natural creatures, built simulation systems populated by many creatures that play sounds, walk, eat, interact with one another, and even die and evolve in their world, and finally developed mappings from the creatures' behavior to sounds and composition [8,9]. He has also used IEC (Interactive Evolutionary Computation) actively in his own compositions, from the standpoint of multi-agent evolutionary systems [10-12].

2. Interaction between a live computer system and human players

As mentioned in Section 1, the application of multi-agent systems to musical creation has yielded promising results. The self-organization function is very useful for controlling compositional and sound-synthesis parameters dynamically. Dynamic alternation of sounds is required especially in recent live computer music, which generates melodies and sound in real time. Moreover, techniques for dynamically controlling parameters to generate interesting melodies and sounds in interactive live computer music, such as capturing a human player's sound or performance information and computing responsive melodies and sounds, have become increasingly important in recent years. This is because new synthesis techniques such as granular synthesis and granular sampling, as well as melody generation on computers, require huge parameter sets.
It is very difficult to apply random values to these parameters directly and manually in real time, because in many cases purely random parameter sets produce unmusical sounds when fed to complicated synthesizers and composition algorithms. An ideal parameter set is one in which each parameter varies over time and bears a strong correlation to the others. Self-organization of the parameters therefore works very effectively when applied to real-time sound synthesis and melody generation. Meanwhile, musical experiments in real-time interaction between human players and computer systems have also drawn considerable attention. A human musical performance carries a huge amount of information, e.g. the dynamics of articulation and tempo fluctuations within a melody. Constructing sounds and melodies dynamically from such captured musical information has become the typical way of creating live interactive musical works. These techniques are very exciting from the viewpoint of both performer and composer. However, programming the rules for the sound- and melody-synthesis part of a real-time interaction is very complicated, because the captured musical information depends heavily on the condition and mood of the performer. Sound and melody synthesis also require huge, strongly interdependent parameter sets whose values must change dynamically to produce distinctive sounds, yet the system operator cannot control the parameters in detail. We believed that realizing musical interaction through self-organization would ease real-time parameter control, and that performer and composer could obtain more interesting and more musical sounds by means of the self-organization of a multi-agent system. Some musical research and works in the area of multi-agent systems were mentioned above.
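The contrast drawn above, between independent random draws and a self-organized stream in which each value depends on the previous state, can be sketched with a toy example. The snippet below is illustrative only and is not from the original system (which used the Swarm Libraries and Max): it derives a 0-to-1 synthesis parameter from the cell density of an elementary cellular automaton, so successive values are correlated because each row is computed from the one before it.

```python
import random

def ca_step(cells, rule=90):
    # One step of an elementary CA with wrap-around neighbourhoods.
    n = len(cells)
    return [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

def ca_param_stream(steps, width=64, rule=90, seed=1):
    """Derive one parameter value per CA step from the cell density.

    Each row is a function of the previous row, so consecutive values
    in the stream are correlated -- unlike independent random draws.
    """
    rng = random.Random(seed)
    cells = [rng.randint(0, 1) for _ in range(width)]
    stream = []
    for _ in range(steps):
        cells = ca_step(cells, rule)
        stream.append(sum(cells) / width)  # density, always in [0, 1]
    return stream

# For comparison: an uncorrelated stream of independent uniform draws.
rng = random.Random(2)
iid_stream = [rng.random() for _ in range(100)]
ca_stream = ca_param_stream(100)
print(len(ca_stream), min(ca_stream) >= 0.0, max(ca_stream) <= 1.0)
```

The point of the sketch is only structural: feeding `ca_stream` to a synthesis parameter yields values that evolve from a shared underlying state, whereas `iid_stream` jumps without memory from one value to the next.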
In those works, the multi-agent processing and the composition or sound synthesis were performed offline, because the processing cost of a multi-agent system was too large for real-time execution. In recent years, however, the processing power of computers has increased dramatically. Twenty years ago we could not have imagined real-time sound processing on the general-purpose processors of very small portable computers; today we have the processing power to simulate simple multi-agent systems of any such kind, for example a small CA or an ant colony. We therefore now have many opportunities to realize interaction between human performers and computer systems that exploit self-organization. With this in mind, we constructed a multi-agent system that interacts musically with a human player, and we observed the interesting musical communication between the self-organizing multi-agent system and human players in a live piece.

3. System Construction

3.1 Overview of the system

Our system consists of two computers: one runs the multi-agent system, and the other performs composition and sound synthesis. The two computers are connected via Ethernet using the "OpenSound Control" protocol [13], which makes a distributed processing environment simple to set up. On a single computer there would be a risk of interference, since executing the multi-agent system requires substantial computational power, which could cause noise or dropouts in the sound-synthesis task. OpenSound Control is a widely used protocol for real-time live music creation, and it also allows the system to connect to other live computer systems. For the multi-agent system itself we adopted the "Swarm Libraries" [14], a software package for multi-agent simulation of complex systems originally developed at the Santa Fe Institute.
We added an OpenSound Control networking layer to the Swarm Libraries and implemented functions to send and receive musical messages from other software. For composition and sound synthesis, "Max" and its clones "Max/MSP" and "PureData" were used. Max and its clones are powerful environments for real-time sound synthesis and the control of algorithmic composition. We have also experimented with connecting other sound software to our system. The order of processing is as follows:

1. The human player's performance information is captured with Max as an audio signal or MIDI data.
2. The information is analyzed with rule sets programmed by the composer and performer, then sent over the network to the computer running the multi-agent system.
3. The multi-agent system (a CA, for example) takes in the messages extracted from the performance information and changes the state of each agent in the virtual world according to the message contents.
4. At predetermined intervals, the agents' states are returned to the music computer over the network.
5. The music computer generates melodies or synthesizes sound from the multi-agent information and outputs the result through loudspeakers or sends it to another instrument connected via MIDI.

Fig. 1. Distributed processing environment of our system

3.2 Examples of multi-agent behaviors generating melody and sound-synthesis rules

The purpose of this research is to apply a multi-agent system to actual musical creation. Rather than restricting ourselves to a single rule, we adopted multiple behavioral rules for the multi-agent system, because in practice the composition of a musical piece accommodates several rules for sound and melody generation. Many algorithms are used for melody generation and sound synthesis in the course of a piece in order to convey a feeling of dynamic movement and to represent a developing musical structure.
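The five-step loop above can be sketched schematically. In the real system the two halves run on separate computers and exchange OpenSound Control messages over Ethernet; in this hedged, in-process sketch plain Python dicts stand in for OSC messages, the agent model is a toy one-dimensional grid with an excitation-spreading rule, and every name and mapping is our illustrative assumption rather than the original implementation.

```python
WIDTH = 16  # assumed size of the toy agent world

def capture_performance(midi_note, velocity):
    # Step 1: capture performance information (MIDI data in the real system).
    return {"note": midi_note, "velocity": velocity}

def analyse(event):
    # Step 2: analyse with composer-defined rules; here we simply reduce
    # the note to a cell index and the velocity to an on/off stimulus.
    return {"cell": event["note"] % WIDTH, "on": event["velocity"] > 0}

def agent_update(cells, message):
    # Step 3: the agent world absorbs the message (perturb one cell), then
    # every agent updates from its neighbourhood -- excitation spreads to
    # any cell with at least one live neighbour (a toy rule).
    cells = list(cells)
    if message["on"]:
        cells[message["cell"]] ^= 1
    return [1 if cells[i - 1] + cells[i] + cells[(i + 1) % WIDTH] >= 1 else 0
            for i in range(WIDTH)]

def agent_states_message(cells):
    # Step 4: at intervals, report the agent states back to the music computer.
    return {"states": cells}

def synthesise(msg, base_note=60):
    # Step 5: map live cells to output notes, one semitone per cell here.
    return [base_note + i for i, c in enumerate(msg["states"]) if c]

# One pass through the loop: a played G (MIDI 67) perturbs cell 3,
# excitation spreads to its neighbours, and three notes come back out.
cells = [0] * WIDTH
event = capture_performance(midi_note=67, velocity=100)
cells = agent_update(cells, analyse(event))
print(synthesise(agent_states_message(cells)))  # → [62, 63, 64]
```

In the actual setup each of these function boundaries corresponds to an OSC message crossing the Ethernet link, which is what keeps the heavy agent simulation from disturbing the audio task.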
For the behavioral rules of the multi-agent system, several models are implemented: one- and two-dimensional CA, Boids, and Ant Colony. Boids is a simulation model that mimics the movement of flocks and herds, in which the behavior of each member of the group is governed by simple rules. Ant Colony, in turn, simulates the behavioral models of ant colonies performing foraging tasks.

Fig. 2. The multi-agent system implemented. The 2D CA rule, Conway's Game of Life, is running.

To map the behaviors that emerge in the simulations to melody generation and sound synthesis, several mappings were produced. In the Boids simulation, for instance, each agent's behavior and location data are connected to a set of white-noise oscillators and band-pass filters. Specifically, an agent's horizontal position (X), vertical position (Y), and moving speed are mapped to three band-pass-filter parameters: center frequency, bandwidth, and position between the left and right channels, respectively. Moreover, during the musical performance, the frequency of piano key presses is adjusted according to the agents' moving speed, producing a dynamically shifting sound cluster. Another metric employed in this approach is the distance between any two agents, which is used to stretch the musical notes the agents create.

4. Composition and Performance of a Piece
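The Boids-to-filter mapping described above can be written out as a small sketch. The paper specifies only which quantity drives which filter parameter (X position to center frequency, Y position to bandwidth, speed to stereo position); the ranges, the world size, and the exponential pitch scaling below are our illustrative assumptions, not values from the original system.

```python
WORLD = 100.0      # assumed side length of the simulation world
MAX_SPEED = 5.0    # assumed maximum agent speed

def agent_to_filter(x, y, speed):
    # X position -> centre frequency, scaled exponentially so that equal
    # distances correspond to equal pitch intervals (100 Hz .. 6400 Hz).
    centre_hz = 100.0 * 2.0 ** (6.0 * x / WORLD)
    # Y position -> bandwidth, here a linear 10 Hz .. 500 Hz range.
    bandwidth_hz = 10.0 + (500.0 - 10.0) * y / WORLD
    # Speed -> stereo position: 0.0 = hard left, 1.0 = hard right.
    pan = min(speed / MAX_SPEED, 1.0)
    return {"centre_hz": centre_hz, "bandwidth_hz": bandwidth_hz, "pan": pan}

# An agent at the centre of the world moving at half speed:
print(agent_to_filter(50.0, 50.0, 2.5))
# → {'centre_hz': 800.0, 'bandwidth_hz': 255.0, 'pan': 0.5}
```

Applying this function to every agent on each simulation tick yields one filtered-noise voice per agent, so the flock's collective motion is heard directly as a drifting, correlated cluster of bands.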